Anh Nguyen, Amira Bendjama, Hong Doan
The field of data science has experienced remarkable growth in recent years, with organizations across diverse industries recognizing the value of data-driven decision making. According to an article by 365 Data Science, the US Bureau of Labor Statistics estimated that the employment rate for data scientists will grow by 36% from 2021 to 2031. This rate is significantly higher than the average growth rate of 5%, indicating substantial growth and demand for data science talent. The surging demand for data science presents both opportunities and challenges for job seekers, particularly recent graduates. One of the significant hurdles they face is the lack of salary transparency in the data science job market. This opacity creates uncertainty regarding compensation and hinders job seekers' ability to negotiate fair salaries.
There are significant variations in data science salaries across different industries and locations. For instance, according to Zippia, data scientists working in the finance and technology sectors tend to earn higher salaries compared to those in other industries. Similarly, the geographical location also plays a crucial role in determining salaries. Large cities with higher concentration of tech companies and living costs such as San Francisco and New York offer higher salaries than smaller cities.
The discrepancies in data science salaries can also be attributed to various factors, including job responsibilities, experience level, educational background, and specific skill sets. A study conducted by Burtch Works, a leading executive recruiting firm, found that data scientists with advanced degrees, such as Ph.D., tend to command higher salaries compared to those with bachelor's or master's degrees. Similarly, professionals with expertise in specialized areas, such as machine learning or natural language processing, often earn higher salaries due to the high demand for these skills.
According to a report surveyed 1,000 US-based full-time employees, conducted by Visier, 79% of all survey respondents want some form of pay transparency and 32% want total transparency, in which all employee salaries are publicized. However, the 2022 Pay Clarity Survey by WTW found that only 17% of companies are disclosing pay range information in U.S. locations where not required by state or local laws. For the states that have pay transparency laws such as Colorado and New York, there has been a decline in job postings since the law went into effect. Some employers comply with the new laws by expanding the salary ranges, sometimes to ridiculous lengths. These statistics highlight the lack of pay transparency not only in the field of data science, but across multiple job markets. Job seekers often struggle to estimate salaries for data science positions due to the scarcity of reliable information.
To address this problem, our project aims to develop a predictive model that estimates the salary for data science jobs. By leveraging publicly available data and employing machine learning algorithms, we seek to provide job seekers a better understanding of salary expectations within the data science job market and empower them to negotiate fair and competitive compensation packages.\
#install.packages("rpart.plot")
#install.packages("ggplot2")
#install.packages("e1071")
# Install the plotly package
#install.packages("plotly")
# Read the first CSV file
data1 <- read.csv("ds_salaries_2023.csv")
# Read the second CSV file excluding the first column
data2 <- read.csv("ds_salaries.csv")[,-1]
# Append rows from data2 to data1
combined_data <- rbind(data2, data1)
# Write the combined data to a new CSV file
write.csv(combined_data, "combined_salaries.csv", row.names = FALSE)
library(ggplot2)
ds_salaries <- read.csv("combined_salaries.csv")
summary(ds_salaries)
work_year experience_level employment_type job_title salary
Min. :2020 Length:4362 Length:4362 Length:4362 Min. : 4000
1st Qu.:2022 Class :character Class :character Class :character 1st Qu.: 93918
Median :2022 Mode :character Mode :character Mode :character Median : 135000
Mean :2022 Mean : 209246
3rd Qu.:2023 3rd Qu.: 180000
Max. :2023 Max. :30400000
salary_currency salary_in_usd employee_residence remote_ratio company_location
Length:4362 Min. : 2859 Length:4362 Min. : 0.0 Length:4362
Class :character 1st Qu.: 90000 Class :character 1st Qu.: 0.0 Class :character
Mode :character Median :130000 Mode :character Median : 50.0 Mode :character
Mean :134054 Mean : 49.7
3rd Qu.:173000 3rd Qu.:100.0
Max. :600000 Max. :100.0
company_size
Length:4362
Class :character
Mode :character
head(ds_salaries,5)
This dataset has 607 rows and 12 columns
We want to focus on “USD” currency so we keep the “salary_in_usd” column and drop “salary_currency” and “salary” column by using subset()
ds_salaries <- subset(ds_salaries, select = -c( salary_currency, salary))
head(ds_salaries, 5)
num_null_rows <- sum(rowSums(is.na(ds_salaries)) == ncol(ds_salaries))
print(num_null_rows)
[1] 0
There are no null values
repeated_entries <- subset(ds_salaries, duplicated(ds_salaries))
print(repeated_entries)
There are 42 duplicate rows
# Remove duplicate rows
df <- ds_salaries[!duplicated(ds_salaries), ]
# check again
repeated_entries_new <- subset(df, duplicated(df))
print(repeated_entries_new)
Adding new column to split our salaries into three groups Low , High, Medium.The approach is to use Percentiles by Dividing the dataset based on them. Hence, we are classifying salaries below the 25th percentile as “Low”, salaries between the 25th and 75th percentile as “Medium”, and salaries above the 75th percentile as “High”.
# adding new column
# Calculate the percentiles
percentiles <- quantile(df$salary_in_usd, probs = c(0.25, 0.75))
# Define the thresholds
low_threshold <- percentiles[1] # 25th percentile
high_threshold <- percentiles[2] # 75th percentile
# Create a new column based on percentiles
df$salary_classification <- ifelse(df$salary_in_usd < low_threshold, "Low",
ifelse(df$salary_in_usd > high_threshold, "High", "Medium"))
table(df$salary_classification)
High Low Medium
644 667 1357
# Get top 10 job titles and their value counts
top10_job_title <- head(sort(table(df$job_title), decreasing = TRUE), 10)
top10_job_title_df <- data.frame(job_title = names(top10_job_title), count = as.numeric(top10_job_title))
top10_job_title_df
NA
# Load the required packages
library(plotly)
# Define custom color palette
custom_colors <- c("#FF6361", "#FFA600", "#FFD700", "#FF76BC", "#69D2E7", "#6A0572", "#FF34B3", "#118AB2", "#FFFF99", "#FFC1CC")
# Create bar plot
fig <- plot_ly(data = top10_job_title_df, x = ~reorder(job_title, -count), y = ~count, type = "bar",
marker = list(color = custom_colors), text = ~count) %>%
layout(title = "Top 10 Job Titles", xaxis = list(title = "Job Titles"), yaxis = list(title = "Count"),
font = list(size = 17), template = "plotly_dark")
# Adjust layout settings to avoid label overlap
fig <- fig %>% layout(
margin = list(b = 150), # Increase bottom margin to provide space for labels
xaxis = list(
tickangle = 45, # Rotate x-axis tick labels
automargin = TRUE # Automatically adjust margins to avoid overlap
)
)
# Display the plot
fig
NA
NA
Our Dataset has 4 different experience categories: - EN: Entry-level / Junior - MI: Mid-level / Intermediate - SE: Senior-level / Expert - EX: Executive-level / Director
# Create a mapping of category abbreviations to full names
category_names_experience <- c("EN" = "Entry-level",
"MI" = "Mid-level",
"SE" = "Senior-level",
"EX" = "Executive-level")
# Get the sorted experience data
experience <- head(sort(table(df$experience_level), decreasing = TRUE))
# Replace the category names with full forms
names(experience) <- category_names_experience[names(experience)]
# Calculate the percentage for each category
percentages <- round(100 * experience / sum(experience), 2)
# Define a custom color palette
custom_colors <- c("#FFA998", "#FF76BC", "#69D2E7", "#FFA600")
# Create a pie chart with cute appearance
pie(experience, labels = paste(names(experience), "(", percentages, "%)"), col = custom_colors, border = "white", clockwise = TRUE, init.angle = 90)
# Add a legend with cute colors
legend("topright", legend = names(experience), fill = custom_colors, border = "white", cex = 0.8)
# Add a title with a cute font
title("Experience Distribution", font.main = 1)
# Create a mapping of category abbreviations to full names
category_names_company <- c("M" = "Medium",
"L" = "Large",
"S" = "Small"
)
# Get the sorted company size data
company_size <- head(sort(table(df$company_size), decreasing = TRUE))
# Replace the category names with full forms
names(company_size) <- category_names_company[names(company_size)]
# Set the maximum value for the y-axis
max_count <- max(company_size)
# Create a bar plot with adjusted y-axis limits
barplot(company_size, col = custom_colors, main = "Company Size Distribution", xlab = "Company Size", ylab = "Count", ylim = c(0, max_count + 10))
NA
NA
# Set the scipen option to a high value
options(scipen = 10)
# Create boxplot of salaries
bp <- boxplot(df$salary_in_usd / 1000,
col = "skyblue",
main = "Boxplot of Salaries",
ylab = "Salary in Thousands USD",
notch = TRUE)
# Get the sorted salary classification data
salary_classification <- sort(table(df$salary_classification), decreasing = TRUE)
salary_classification_df <- data.frame(salary_classification= names(salary_classification ), count = as.numeric(salary_classification ))
fig <- plot_ly(
data = salary_classification_df,
x = ~reorder(salary_classification, -count),
y = ~count,
type = "bar",
marker = list(color = custom_colors),
text = ~count,
width = 700,
height = 400
)
fig <- fig %>% layout(
title = "Salary Classification Distribution",
xaxis = list(title = "Salary Classification"),
yaxis = list(title = "Count"),
font = list(size = 17),
template = "ggplot2"
)
fig
NA
NA
NA
# Create a data frame with counts of experience levels by salary classification
experience_salary <- table(df$experience_level, df$salary_classification)
# Define custom colors for each experience level
custom_colors <- c("#69D2E7", "#1900ff", "#FF6361", "#FFD700")
# Create a data frame for the plot
plot_data <- data.frame(Experience = rownames(experience_salary),
Salary_Classification = colnames(experience_salary),
Count = as.vector(experience_salary))
# Convert Count column to numeric
plot_data$Count <- as.numeric(plot_data$Count)
# Create the bar plot
library(plotly)
fig <- plot_ly(data = plot_data, x = ~Salary_Classification, y = ~Count,
color = ~Experience, colors = custom_colors, type = "bar") %>%
layout(title = "Experience Level by Salary Classification",
xaxis = list(title = "Salary Classification"),
yaxis = list(title = "Count"),
font = list(size = 17),
template = "plotly_dark")
fig
NA
NA
# Calculate the percentiles
percentiles <- quantile(df$salary_in_usd, probs = c(0.5))
# Define the threshold
threshold <- percentiles[1] # 50th percentile
# Create a new column based on the threshold
df$salary_classification_Binary <- ifelse(df$salary_in_usd < threshold, "Low", "High")
# Display the table of salary classifications
table(df$salary_classification_Binary)
High Low
1334 1334
# Calculate the median of the column df$salary_in_usd
median_salary <- median(df$salary_in_usd, na.rm = TRUE)
print(median_salary)
[1] 127344
df$company_location <- ifelse(df$company_location == "US", "US", "Other")
df$employee_residence <- ifelse(df$employee_residence == "US", "US", "Other")
df$job_title <- ifelse(grepl("Data Science", df$job_title) | grepl("Data Scientist", df$job_title), "Data Scientist",
ifelse(grepl("Analyst", df$job_title) | grepl("Analytics", df$job_title), "Data Analyst",
ifelse(grepl("Data Engineer", df$job_title) | grepl("Data Engineering", df$job_title), "Data Engineer",
"Other")))
table(df$job_title)
Data Analyst Data Engineer Data Scientist Other
598 659 697 714
table(df$employee_residence)
Other US
768 1900
table(df$company_location)
Other US
732 1936
df <- data.frame(lapply(df, factor))
factors <- sapply(df, is.factor)
factor_cols <- names(df[factors])
factor_cols
[1] "work_year" "experience_level" "employment_type"
[4] "job_title" "salary_in_usd" "employee_residence"
[7] "remote_ratio" "company_location" "company_size"
[10] "salary_classification" "salary_classification_Binary"
# 3 - 58
set.seed(3) # Set a seed for reproducibility
train_indices <- sample(1:nrow(df), 0.9 * nrow(df)) # 80% for training
train_data <- df[train_indices, ]
test_data <- df[-train_indices, ]
# Separate the features (independent variables) from the target variable
X <- train_data[, !(names(train_data) %in% c("salary_in_usd", "salary_classification"))]
#X <- train_data[,c("experience_level","company_size","remote_ratio")]
Y <- train_data$salary_classification
library(nnet)
# Fit the logistic regression model
logistic_model <- multinom(Y ~ ., data = X)
# weights: 63 (40 variable)
initial value 2637.768105
iter 10 value 1688.858164
iter 20 value 1519.556578
iter 30 value 1447.811556
iter 40 value 1421.162829
iter 50 value 1419.866694
final value 1419.860084
converged
# Make predictions on the test data
test_data$predicted_classification <- predict(logistic_model, newdata = test_data)
# Evaluate model performance
library(caret)
confusion_matrix <- confusionMatrix(test_data$predicted_classification, test_data$salary_classification)
print(confusion_matrix)
Confusion Matrix and Statistics
Reference
Prediction High Low Medium
High 36 0 24
Low 0 52 18
Medium 25 10 102
Overall Statistics
Accuracy : 0.7116
95% CI : (0.6533, 0.7652)
No Information Rate : 0.5393
P-Value [Acc > NIR] : 0.000000006182
Kappa : 0.528
Mcnemar's Test P-Value : NA
Statistics by Class:
Class: High Class: Low Class: Medium
Sensitivity 0.5902 0.8387 0.7083
Specificity 0.8835 0.9122 0.7154
Pos Pred Value 0.6000 0.7429 0.7445
Neg Pred Value 0.8792 0.9492 0.6769
Prevalence 0.2285 0.2322 0.5393
Detection Rate 0.1348 0.1948 0.3820
Detection Prevalence 0.2247 0.2622 0.5131
Balanced Accuracy 0.7368 0.8755 0.7119
# Load the randomForest package
library(randomForest)
library(caret)
# Train the Random Forest classifier
rf_model <- randomForest(X, Y)
# Make predictions on new data
# Assuming you have a data frame called test_data with similar features as train_data
predictions <- predict(rf_model, test_data)
# Calculate accuracy
accuracy <- sum(predictions == test_data$salary_classification) / length(test_data$salary_classification)
cat("Accuracy:", accuracy, "\n")
Accuracy: 0.6966292
# Create confusion matrix
conf_matrix <- table(predictions, test_data$salary_classification)
cat("Confusion Matrix:\n")
Confusion Matrix:
print(conf_matrix)
predictions High Low Medium
High 35 0 26
Low 0 50 17
Medium 26 12 101
# Calculate precision, recall, and F1-score for each class
class_metrics <- caret::confusionMatrix(predictions, test_data$salary_classification)
cat("Class Metrics:\n")
Class Metrics:
print(class_metrics$byClass)
Sensitivity Specificity Pos Pred Value Neg Pred Value Precision Recall F1
Class: High 0.5737705 0.8737864 0.5737705 0.8737864 0.5737705 0.5737705 0.5737705
Class: Low 0.8064516 0.9170732 0.7462687 0.9400000 0.7462687 0.8064516 0.7751938
Class: Medium 0.7013889 0.6910569 0.7266187 0.6640625 0.7266187 0.7013889 0.7137809
Prevalence Detection Rate Detection Prevalence Balanced Accuracy
Class: High 0.2284644 0.1310861 0.2284644 0.7237784
Class: Low 0.2322097 0.1872659 0.2509363 0.8617624
Class: Medium 0.5393258 0.3782772 0.5205993 0.6962229
importance <- varImp(rf_model)
print(importance)
NA
NA
library(e1071)
# Train the SVM classifier
svm_model <- svm(Y ~ ., data = X, kernel = "radial")
# Make predictions on new data
# Assuming you have a data frame called test_data with similar features as train_data
predictions <- predict(svm_model, test_data)
# Evaluate the model
# Assuming you have the actual target variable values in test_data$salary_classification
accuracy <- sum(predictions == test_data$salary_classification) / length(test_data$salary_classification)
cat("Accuracy:", accuracy, "\n")
Accuracy: 0.6779026
# Create confusion matrix
conf_matrix <- table(predictions, test_data$salary_classification)
cat("Confusion Matrix:\n")
Confusion Matrix:
print(conf_matrix)
predictions High Low Medium
High 24 0 18
Low 0 48 17
Medium 37 14 109
library("rpart")
library("rpart.plot")
decision_tree <- rpart(Y ~ .,
data = X,
method="class")
# I only tried attributes with a limited number of unique values because using attributes like job_title and employee_residence caused the program to run endlessly.
# remote_ratio is the most useful variable for prediction
# Make predictions on test data
predictions <- predict(decision_tree, newdata = test_data, type = "class")
# Evaluate the model
accuracy <- sum(predictions == test_data$salary_classification) / nrow(test_data)
print(paste("Accuracy:", accuracy))
[1] "Accuracy: 0.700374531835206"
rpart.plot(decision_tree)
Major Challenges and Solutions
Conclusion and Future Work
References
The Data Scientist Job Outlook in 2023 | 365 Data Science
Burtch-Works-Study_DS-PAP-2019.pdf (burtchworks.com)
New Visier Report Reveals 79% of Employees Want Pay Transparency (prnewswire.com)
More NA organizations plan to disclose pay information - WTW (wtwco.com)